perm filename AI.80[AM,DBL] blob sn#376644 filedate 1978-08-31 generic text, type T, neo UTF8
                          AI in the 80s
 
 
These are a few personal observations; please DO NOT circulate!
 
Bruce has written an excellent inverted-pyramid story about the
meeting; my account will therefore help fill you in by being more
chronological.
 
The morning (and some of the afternoon) of the first day was consumed
by each participant giving a soft seven-minute statement.  Unfortunately,
no one was told what the purpose, orientation, or even TOPIC of that
statement was to be.  Here are the gists of those statements:

Nils:  Good applications exist already, and we'll see more in the 80s.
       But we need to develop special projects for non-applied research:
              in chess, 1st-rate automatic programming, Advice taker,
              natural lang (eg, the Travel Consultant), and (of course)
              general purpose robots (even ones which won't replace
                     people doing specific tasks in military or market settings).

Cordell:  What we need is "Enlightened Management".  Let's send Nils to
       Washington for 10 years, to guarantee for us a fixed level of
       funding.  Must do much planning at the beginning.
  [Denicoff:  Danger in any 10-year project, Cordell, is that what
       start out as speculative goals end up as almighty specs.  You
       end up sacrificing research for program performance along those
       specified lines.  This is even worse if several groups are competing]
       Large system building is the key vehicle for progress.  Why?
       Because the inherent complexity in the tasks we work on means you
       can't easily isolate individual issues for separate study. 
       I see a big push toward AP (the knowledge acquisition coming via
       incremental knowl. changes) and multi-capability systems, in the 80s.

Bob Balzer:  Programs that re-program existing large systems.
       Interpretively, we like to add a rule to a PS, but for performance
       reasons we must compile our PSs into more efficient code.
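
       A minimal, hypothetical sketch of that tradeoff (mine, not Balzer's
       actual system): a production rule kept as data is easy to add or edit
       at run time, but re-interpreting it on every cycle is slower than the
       same test compiled into straight-line code.

         # Interpretive form: the rule is data, easy to add or change at run time.
         rule = {"if":   lambda wm: wm.get("temperature", 0) > 100,
                 "then": lambda wm: wm.update({"state": "boiling"})}

         def run_interpreted(rules, wm):
             # The interpreter re-tests every rule's condition on every cycle.
             for r in rules:
                 if r["if"](wm):
                     r["then"](wm)
             return wm

         def run_compiled(wm):
             # The same rule folded into straight-line code: faster to run,
             # but harder to modify incrementally once frozen this way.
             if wm.get("temperature", 0) > 100:
                 wm["state"] = "boiling"
             return wm

         print(run_interpreted([rule], {"temperature": 120}))  # state becomes "boiling"
         print(run_compiled({"temperature": 120}))             # same result, no rule lookup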

Peter Hart:    People operate at the most compiled level possible at each
      moment; but they always CAN revert to a more "interpretive" level,
      eg by introspecting, to check their actions.
      The implication (for AI in the 80s) is that our programs must have
      multiple levels of knowledge (levels of abstractness).
      In addition, they must not be so fragile and inextensible.

[Marvin:  But what about the difficulty AI has eliciting knowledge from experts?]

Bill Woods:   YES!  There is a real danger that our "introspections" are
      merely rationalizations (at best, the way we'd like to have done
      something; at worst, fabrications because of our inability to
      express, in linear natural language, descriptions of processes
      that go on in our brain in parallel).

Ira Goldstein:  Similarly, we don't have good methodologies for studying how
      children acquire expertise.

Doug Lenat:
      1.  Areas of greatest potential impact in the 80s
          a. areas which have such a great potential impact on Mankind that
             almost ANY effect by AI upon them would be important
            (i) Molecular genetics:  move from planning to proposing and
                simulating expts.
            (ii) Energy:  quest for new resources,
                          planning and effective management of existing ones
            (iii) AI itself:  AI asst. programs which aid in the construction
                          of new, large AI KB programs:  DWIM*, etc.
          b.  areas which impact on large numbers of people individually
            (i) Home video games,  other "intelligent home" applications
                [Nils:  Yes.  Will lead into home video INSTRUCTION]
            (ii) Intelligent medical instruments, gas pumps, etc.
            (iii)  Spinoffs and direct applications of AI in education
          c. areas in which AI attains Superhuman performance
             (hence the absolute import of the area is not as crucial)
            I believe this will come about via synergy, from expert rule-based
            programs, whose data bases contain v. many rules, elicited from
            many experts.
          d. areas whose very existence is owed to AI
            (i) eg, in the past, we saw computational parts of IPS open up
            (ii) My work in the field of "Heuristic", eg.
                  How are heurs. discovered?  What does their "space" look
                  like?  How could we predict the power of Dendral, except
                  by running it?
            (iii) areas not yet dreamed of  <this space left blank>
                   ...
      2.  Impediments to that progress
            a. Not enough cycles in the world
               Answer #1:  soon alleviated by LISP machines, PDP10s on a
                           chip, etc.
               Answer #2:  as long as large communities time-share, the load
                           avg will rise to the point where one just barely
                            cannot get useful work done in the daytime.
               Answer #3:  how many AI projects do you know where the research
                           -- not necessarily the performance -- is cycle-limited?
             b. Not enough ideas in the world (in AI, particularly)
               Most of us in this room are idea-rich; we need only the time
                to investigate things.  The impression of paucity comes from,
                eg, IJCAI, where we see the 80/20 rule in action  (80 percent
                of the worthwhile stuff is done by 20% of us).
                [Danny:  more like 99/1]
             c.  The field is fragmented
                This is real, but not a problem.  I have little to learn from,
                eg, audition;  less than from, say, mathematics or psychology.
                At IJCAI, I was not bothered by the six simul. sessions,
                because my interests are not wide enough to make me want to
                 attend all the talks anyway.   Math often gets its power
                 by applying an idea/theorem from one area in a very remote
                 area; this is not common in AI.  The appendages of AI today
                  do not interact synergistically.  To lump Robot arm coord. and
                 Story-Understanding together is like confusing the study of
                 Cardboard Box-Making with the study of Cereal nutrition.
                 [Earl:  Isn't there a "core" of AI, then?]
                 The only common ground is weak methods, and that's the reason
                 we have so little synergy: weak methods are weak.  It will
                 be like physics:  weak methods are the "calculus", and
                 the subfields will be studied separately, as "Mechanics"
                 "Electromagnetism", etc. are now.
                 [Peter Hart:  How many depts. of AI will each university
                 have, then, by 1990?]  One, just like in physics.
              d. Not enough money
                 This locally seems true, but is not the real culprit.
                 At most, the danger is that AI will work on projects only
                  for the money that exists in certain areas.  I think the
                 real problem is:

              e. Not enough people: not enough "Knowledge Acquisition Engineers"
                  Even if we had $100 million to spend, what would we do?
                 There aren't enough competent researchers being trained now
                 to carry out all the applications mentioned in (1) above.
                 As Ed Feigenbaum said, 15 years ago the govt. gave grants for
                 training, not merely research.  We need to think about getting
                  that started again.   [Denicoff agrees]
                  Marvin started this meeting by reading Newell & Simon's famous
                  overoptimistic claims for AI in the 60s; the big mistake they
                  made, they now feel, was in grossly overestimating the lure
                 of the field.  Instead of the top thousand scientists each
                 year pouring into AI, there was just a mixed trickle.
                 If the predictions I made aren't fulfilled, it will be because
                  there weren't enough KAEs around in the 80s.
<break for lunch>

Bruce Buchanan:  Transfer of Expertise --  3 modes 
      1.    a.  end: interactive tuning of large, existing rule base
                    (eg, like Randy's work on asking specific questions in context)
            b.  middle: get the bulk of the rules into the knowledge base
            c.  front:  specify a language: a representation of knowledge,
                  a specified inference engine, etc.
      2.   By example  (eg MetaDendral)
      3.   Build only an apprentice; allow the human to retain all the
                expertise, judgment, choice, processing,... in his own mind.
                Programs would contain only menial, domain-independent kinds
                of assistance (eg bookkeeping); they could still be useful.

Earl Sacerdoti: The key will be Distributed Intelligence
    Aspects: Machine-Machine Interface (just as last decade we focused
             on Man-Machine Interface).  
            Distributed operating systems (composed of local specialists).
            Non-anthropomorphic robots (many specialized mini-robots)

Bob Moore:  Theory is the only path to Salvation for our field.  Repent!
        A good ex. of theory is Mitch Marcus' recent thesis
        [[I, Doug, believe the theory here is that English can be parsed
          left-to-right, with no backup, with a memory (stack) of size 3]]
        How do we evaluate claims like "Program X can represent class C of
          knowledge"?  eg, we might use Tarskian semantics
        We often ignore representing the following sticky kinds of knowledge:
          belief, continuous and parallel actions, pictures, substances,
          counterfactual conditionals, action modifiers, control structures
        Instead of tackling the above variety, we work on 8 systems (representation
          languages) which can represent well only the same small set of relations.
        
Danny Bobrow:  For people, it's usually easier to do something the 2nd time,
        but this is rarely true for programs.  They don't make use of their
        experiences.  This will be one of the big changes in the future.
        How do you (or a program) characterize X?  The first characterization
        is bound to be imperfect.  But you let the nature of the imperfections
        and discrepancies found guide you toward the next characterization,
         which will hopefully be less inappropriate.


Gary Hendrix:  We're up against boundaries in many areas of research,
       where to get a small improvement in performance would demand an
       extra hundred or even thousandfold increase in the amount of power
       we'd have to expend...  That is, a hundred times the power we can
       muster now.

Cordell's summary of the morning:  the big themes
     Large research projects, 
     Self-knowledge, mult. levels of knowl.
     Knowledge acquisition
     Question of the very paradigm itself of "Questioning the experts"
     Watch kids to get evolution of knowledge
     Repr of knowledge
     Time spent "getting going" vs "attacking the real issues"

<end of the logical Morning session -- actually mid-afternoon!>
  
The afternoon of the first day was scheduled to be spent talking about
AI technology.  We ended up discussing the representation of
action modifiers:  "walk slowly", "walk carefully", etc.  Everyone
was either silent, or actively vociferous about how one couldn't
represent such things effectively (ie, in a general way, yet such
that the operations one wants to do on them -- like Run them --
could be done quickly).  The silent folk, including myself,
perhaps knew how to do the repr.  I spent the time making a big
diagram representing the above two action modifiers.  I'll show
it to you when you get back if you want, but I suspect that you
would have been silent.  As Mark Stefik said later when I showed it
to him, "There's nothing unusual about your attitude for HPP:
while you sweated and demystified it, others preferred to preserve
the mystery".

<end of first day>
For "homework", we were to prepare a short statement on our plans for
our own personal research over the next ten years.  This was to be presented
during the afternoon session on Tuesday.

Tuesday morning was devoted to Applications of AI.

First we touched on heuristics for choosing domains of application.
  Three people simultaneously said "Ed has a 45-minute talk on this".
  Peter & Earl: The expertise must already exist in the world,
      else you're forced to work in field X, not in AI.
  Cordell:  an AI pgm which evaluates AI pgms would be a good application.
  Bill Woods: long list of applications
    a. Applications related to "knowledge fusion" (correlating multiple sensors):
        perception in the midst of noise,
        stereo vision, movie perception, correlating sound & motion,
        developing a consensus out of multiple descriptions of the very
        same object or event
    b.  Applications of knowledge use, representation, and management
        Algorithms expert (a library of algorithms and problems:
            given a problem, find suitable algorithms for it).
        History machine [[as far as I could see, not much more than LISPX
            already gives you.  Bill was, I guess, suggesting that the
            program use this data nontrivially.  Or, I might have missed
            the idea; it might be more like an intelligent scientist's lab
            notebook, a log of his thoughts, tries, failures, thoughts,...]]
        Instructable system: capable of assimilating new rules [[familiar??]]
        Knowledge sponge:  a simulated 3-year-old [[sounds even MORE familiar]]
            [[Bill combines some brilliant insights with some naivete]]
       
  Discussion of Bill's list:
       He was given pointers to look into, regarding many of his ideas.
       Peter Hart:  I know an "expert" on crap shooting.  You could get
          a hundred rules from him, and you WOULD duplicate his behavior,
          but so what -- the rules aren't "real" knowledge, just superstition.
       Marty Tenenbaum:  Bill, introspect on the generator of that list.
       Bill: Certain research topics lie on the critical path to a goal.
          If we want to pursue research X, we should choose a goal on whose
          critical path it lies.  Otherwise, we'll be tempted to finesse
          the research, since it's nonessential to "meeting the specs".

  Cordell:  an automated anti-ballistic missile system for this country.
      Prefer an intelligent button to a dumb one.

  Denicoff:  automated maintenance system
             creativity-booster (eg, for a playwright, it would sense his
               mood and alter the lighting, temperature, etc.)
             prostheses
  Doug:  Real, large user models
  Ira:   there is a real shortage of freedom from funding constraints.
  Denicoff:  Maybe just 1 agency should exist for funding CS
  [general disagreement, of course, with that idea]

The next segment of the day was spent having Eamon Barrett describe
his Intelligent Systems program at NSF.  He went over his budget,
broken into four categories (Vision & Audition, Know Based Sys,
Theorem proving & Automatic Programming, Learning & Problem Solving), for this year, and described
how much was already committed for the next few years.


Category                                  FY78    FY79    FY80    Requests so far for FY80

Vision & Speech
  Vision                                   800     200     166      488
  NL                                       750     183     123       22
  Patt Recog                               150     190       0      616
  Music                                    150     155       0      146

Knowledge Based Systems
  (mainly MOLGEN)                          200      80       0        0

TP & AP                                    200     226     108      444

Computer simulation of learning,
  problem solving, and cognition           250     278     125      121

The budget is now $2.5M, and will increase slightly each year.
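
A quick arithmetic check of the figures above (my own note, and assuming the
table's amounts are in thousands of dollars): the FY78 column does sum to the
$2.5M total.

    # Hypothetical check, assuming the table's amounts are in $K.
    fy78 = [800, 750, 150, 150, 200, 200, 250]   # FY78 column, top to bottom
    print(sum(fy78))                             # 2500, i.e. $2.5M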
 

Finally, in the afternoon, we found out why we were there.
We never did get to give those brief descriptions of what we each
planned for the next decade.  Instead, the afternoon was spent
discussing a bombshell proposal by Marvin, apparently initiated by
ARPA and seconded by NSF:  namely, that those two agencies jointly fund
a program ($0.5M from Intell Sys, $1M from other NSF, and roughly $3M from
ARPA, all of it annually, for 5 years; the NSF money would be shifted from
existing budgets, but the ARPA money would be a new infusion into AI support).  Bruce
described this key phase very well, so I will just mention a couple
details.

I pushed for work toward a huge data base, one that "we all" could
use.  This meant work on representation, developing a language,
probably developing several standard representations and techniques
for (sometimes) converting from one to another, assembling large
amounts of real-world data, common sense, etc. into such a DB,...
Many people found this too brittle, but liked it better when Ira
focused on just the Uniform Representation Language.  They liked it
even better when Marvin broadened the umbrella to KR in general,
and they all felt they could be funded for the next five years.
I made the point that choosing KR had the benefit that after the
five years were up, something would exist that we could use, whereas
choosing, say, speech, might provide a nice umbrella but after the
project ended there would be little of use to a large fraction of the
AI community.  This irreverence toward the myriad benefits of the
speech project raised quite a howl (I think those benefits are
mere hearsay) -- especially in that group.

Bill Woods' main objection was that there is no easy way to evaluate
how well a KR project is doing and has done.  But the two funders
remained adamant in insisting that ARPA (Fossum personally) was 
interested in supporting basic AI research.
I will be at the S. Calif. conference next Friday in LA
(that's one reason it's being held then).  If you're interested,
it's at ISI from 9-5 -- contact balzer@isi.  Nils and Bill Woods
will be at the E. Coast one next month, along with the funders.

The funders want to approach Fossum directly, not via Conn or
Russel.  They claim that the project managers (through habituation)
are more insecure about non-mission-oriented projects than Fossum
is.  I think you should take an interest in this (ie do something)
to avoid (i) a screwup in which the extra money is lost completely,
and also to avoid (ii) another 5-year speech project like the last one.


See you after labor day (ie, Tuesday)...   I'm leaving Tuesday afternoon
for Rand, and am seeing Barrett and Stefik all day Tuesday, so let's
talk Tuesday morning (9:30-10) or sometime Monday, or by phone before then.
In case you forgot, my home no. is 965-1228.   Glad you're having a good time.


Doug